Ethics of Adversarial Machine Learning and Data Poisoning


Abstract

This paper investigates the ethical implications of using adversarial machine learning for purpose obfuscation. We suggest that such attacks can be justified on privacy grounds, but that they also cause collateral damage. To clarify the matter, we employ two use cases, facial recognition and medical machine learning, to evaluate harm-based counterarguments to privacy-motivated attacks. We conclude that obfuscation by data poisoning is justified in the facial recognition case but not in the medical learning case, and we motivate this conclusion with psychological arguments about change, further considerations, and limitations on applications.


Related Articles

Adversarial and Secure Machine Learning

The advance of machine learning has enabled the establishment of many automatic systems that leverage its outstanding predictive power. From face recognition to recommendation systems to social-network relationship mining, machine learning has drawn rising attention from both researchers and practitioners across many domains. Data-driven technologies based on machine learning facilitate th...


Foundations of Adversarial Machine Learning

As classifiers are deployed to detect malicious behavior ranging from spam to terrorism, adversaries modify their behaviors to avoid detection (e.g., [4, 3, 6]). This makes the very behavior the classifier is trying to detect a function of the classifier itself. Learners that account for concept drift (e.g., [5]) are not sufficient since they do not allow the change in concept to depend on the ...


Adversarial Machine Learning at Scale

Adversarial examples are malicious inputs designed to fool machine learning models. They often transfer from one model to another, allowing attackers to mount black box attacks without knowledge of the target model’s parameters. Adversarial training is the process of explicitly training a model on adversarial examples, in order to make it more robust to attack or to reduce its test error on cle...
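The adversarial examples described above can be illustrated with a minimal sketch of the fast gradient sign method (FGSM), the gradient-step attack commonly used in adversarial training. The model, weights, and inputs below are made-up toy values, not taken from the paper; the sketch only shows how one signed-gradient step can flip a classifier's prediction.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """Perturb x by eps in the direction that increases the loss (FGSM)."""
    p = sigmoid(w @ x + b)            # model's predicted probability
    grad_x = (p - y) * w              # d(cross-entropy)/dx for logistic regression
    return x + eps * np.sign(grad_x)  # one signed-gradient step

# Toy logistic-regression "model" and a clean input with true label 1.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])
y = 1.0

x_adv = fgsm(x, y, w, b, eps=0.9)
print(sigmoid(w @ x + b) > 0.5)       # True: clean input classified correctly
print(sigmoid(w @ x_adv + b) > 0.5)   # False: the adversarial input flips the prediction
```

The same signed-gradient perturbation, applied to images, is what makes adversarial examples nearly imperceptible yet effective, and generating them on the fly is the core of the adversarial-training loop the abstract describes.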


Automated poisoning attacks and defenses in malware detection systems: An adversarial machine learning approach

The evolution of mobile malware poses a serious threat to smartphone security. Today, sophisticated attackers can adapt by maximally sabotaging machine-learning classifiers via polluting training data, rendering most recent machine learning-based malware detection tools (such as Drebin and DroidAPIMiner) ineffective. In this paper, we explore the feasibility of constructing crafted malware samp...
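The training-set pollution described above can be sketched in miniature. The following is a hypothetical toy example, not the paper's Drebin/DroidAPIMiner experiments: an attacker injects decoy points labeled "malware" far from the real malware cluster, dragging a nearest-centroid detector's malware centroid away so that real malware is scored as benign.

```python
import numpy as np

rng = np.random.default_rng(0)
benign  = rng.normal([0.0, 0.0], 0.5, size=(100, 2))  # synthetic benign features
malware = rng.normal([3.0, 3.0], 0.5, size=(100, 2))  # synthetic malware features

def centroids(X, y):
    """Per-class mean feature vectors."""
    return X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)

def detect(X, c_benign, c_malware):
    """Label a sample malware (1) if it lies nearer the malware centroid."""
    d0 = np.linalg.norm(X - c_benign, axis=1)
    d1 = np.linalg.norm(X - c_malware, axis=1)
    return (d1 < d0).astype(int)

# Clean training: the detector separates the two clusters almost perfectly.
X = np.vstack([benign, malware])
y = np.array([0] * 100 + [1] * 100)
c0, c1 = centroids(X, y)
clean_rate = detect(malware, c0, c1).mean()

# Poisoned training: 200 decoy points near [9, 9], labeled malware,
# pull the malware centroid to roughly [7, 7], far from real malware.
poison = rng.normal([9.0, 9.0], 0.5, size=(200, 2))
Xp = np.vstack([X, poison])
yp = np.concatenate([y, np.ones(200, dtype=int)])
p0, p1 = centroids(Xp, yp)
poisoned_rate = detect(malware, p0, p1).mean()

print(clean_rate, poisoned_rate)  # detection of real malware collapses after poisoning
```

Real attacks against learned classifiers craft the poison points to look like plausible samples, but the mechanism is the same: contaminated training data shifts the learned decision rule in the attacker's favor.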


Cleverhans V0.1: an Adversarial Machine Learning Library

cleverhans is a software library that provides standardized reference implementations of adversarial example construction techniques and adversarial training. The library may be used to develop more robust machine learning models and to provide standardized benchmarks of models’ performance in the adversarial setting. Benchmarks constructed without a standardized implementation of adversarial e...



Journal

Journal title: Digital Society

Year: 2023

ISSN: 2731-4650, 2731-4669

DOI: https://doi.org/10.1007/s44206-023-00039-1